Data center design: HVAC systems
In the information age, data centers are among the most critical components of a facility. If the data center isn’t reliable, business can’t be done. This roundtable reviews data center cooling and HVAC systems.
- Kevin V. Dickens, PE, LEED BD+C, Mission critical design principal, Jacobs Engineering, St. Louis
- Terrence J. Gillick, President, Primary Integration Solutions Inc., Charlotte, N.C.
- Bill Kosik, PE, CEM, BEMP, LEED AP BD+C, Principal data center energy technologist, HP Technology Services, Chicago
- Keith Lane, PE, RCDD, NTS, RTPM, LC, LEED AP BD+C, President/CEO, Lane Coburn & Associates LLC, Bothell, Wash.
- David E. Wesemann, PE, LEED AP, ATD, President, Spectrum Engineers Inc., Salt Lake City
CSE: Describe a recent modular data center project, and how you solved any cooling issues.
Gillick: We recently commissioned a project comprising four 5,000-sq-ft modular data centers and associated modular intermediate distribution frame (IDF) and UPS rooms and other support spaces. The mechanical and electrical spaces were designed with direct expansion air conditioning systems, and the white spaces with rooftop evaporative cooling units. During testing and commissioning, we found that even at their highest operating levels, the evaporative cooling units only barely met the lowest environmental requirements of the white spaces; they were insufficiently sized to provide free cooling. The issue was resolved by replacing them with more robust industrial evaporative cooling units and making associated modifications to the building automation system and setpoints to bring the cooling system into compliance with the plans and specifications.
Kosik: Customers are looking for highly modular, flexible facilities. Alongside those criteria, we need to mix in reliability, energy efficiency, and first cost. The approach we have been taking is a “just in time” approach to data center capacity. The facilities are designed in modules of varying types to give the customer options on size, UPS capacity, cooling and electrical systems, and reliability levels. However, the entire planning, design, construction, and commissioning process is highly standardized to minimize design/construction time and first cost. It is a nice blend of rigorous standardization while still presenting choices to the customer on the type of flexible facility that is needed.
Dickens: For our clients in the cloud, we continue to see data centers evolve. Off-the-shelf type solutions haven’t entered into my designs as of yet, but we continue to see a progression toward building in megawatt increments. The form factor of these builds varies from custom modules (that resemble those by IO or Colt) grouped together in a warehouse-type structure, on to more traditional stick-built plant components matched to equally spartan structures. We still haven’t had the right opportunities to go modular in a financial, institutional, or intelligence application.
CSE: Describe how you’ve used Neher-McGrath heating calculations in a data center, and whether the facility achieved the expected outcome.
Lane: At our firm, we have significant experience providing Neher-McGrath duct bank heating calculations for data centers and other mission critical facilities. Heating calculations are recommended for mission critical facilities when large electrical duct banks with large numbers of conduits and conductors are routed in the earth. The calculations are performed to determine whether any derating of the conductors is required. Where an underground electrical duct bank installation uses the configurations identified in the NFPA 70: National Electrical Code (NEC) examples, the NEC indicates in Section 310.15(B) that calculations can be performed to determine the actual rating of the conductors. A formula provided in the NEC can be used under “engineering supervision” for these calculations. This formula is typically not sufficient, however, because it does not include the effect of mutual heating between cables in other duct banks. If the intent is to use native backfill, soil samples and dry-out curve testing per IEEE Std 442 are required to establish the actual value of RHO (soil thermal resistivity) to use in the calculations. The average RHO values listed in the NEC have been used in the past, but they should not be used for site-specific calculations because it has been our experience that there is no average soil. An evaluation of worst-case moisture content must also be provided. If the engineer has used values that are too low and do not represent actual conditions, overheating, thermal runaway, and failure can occur. If the engineer uses overly conservative values, too many conduits will be used, resulting in more cost, more space, and higher fault current levels.
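The NEC engineering-supervision formula Lane refers to can be sketched as below. This is an illustrative implementation only: the parameter values in the example are hypothetical, and, as Lane notes, a real study must derive the thermal resistance from the full Neher-McGrath method, including mutual heating between duct banks and site-measured RHO.

```python
import math

def nec_ampacity_ka(t_c, t_a, delta_t_d, r_dc_uohm_ft, y_c, r_ca):
    """NEC engineering-supervision ampacity formula, result in kiloamperes.

    t_c           -- maximum allowable conductor temperature (deg C)
    t_a           -- ambient earth temperature (deg C)
    delta_t_d     -- dielectric-loss temperature rise (deg C)
    r_dc_uohm_ft  -- dc resistance of conductor at t_c (micro-ohms/ft)
    y_c           -- ac/dc resistance ratio increment (skin/proximity effect)
    r_ca          -- effective thermal resistance, conductor to ambient
                     (thermal ohm-ft); this is the Neher-McGrath term that
                     must reflect duct bank geometry, RHO, and mutual heating
    """
    return math.sqrt((t_c - (t_a + delta_t_d)) /
                     (r_dc_uohm_ft * (1.0 + y_c) * r_ca))

# Hypothetical inputs for a 75 C copper conductor in a 20 C earth ambient;
# r_ca = 6 thermal ohm-ft is assumed for illustration, not calculated.
i_ka = nec_ampacity_ka(t_c=75, t_a=20, delta_t_d=0,
                       r_dc_uohm_ft=25.8, y_c=0.03, r_ca=6.0)
```

Note that the whole derating question hinges on `r_ca`: an optimistic value (low RHO, mutual heating ignored) overstates ampacity and invites the thermal runaway Lane warns about.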
CSE: What unique requirements do data center HVAC systems have that you wouldn’t encounter on other structures?
Gillick: Unlike in other structures, precise temperature and humidity control within the raised floor area is the paramount goal of the data center HVAC system. Thus, specifications describe precise operating setpoints for temperature and humidity, with very limited tolerance for deviation. In addition, filtration requirements are more stringent to exclude caustic gases and airborne particulates from these spaces. We also try to identify opportunities to provide as much free cooling as possible. Additionally, we use large thermal storage systems to provide continuous backup cooling in the event of an HVAC failure. In many cases, we also provide UPS to back up the fans and pumps associated with the HVAC system in a critical facility. Moreover, redundancy of equipment and controls is significantly greater.
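As a back-of-the-envelope illustration of the thermal storage backup Gillick describes, the ride-through time of a chilled-water tank follows directly from its stored energy divided by the critical cooling load. The tank volume, usable delta-T, and load below are hypothetical values chosen for illustration, not figures from any project:

```python
def ride_through_minutes(tank_m3, usable_delta_t_c, load_kw):
    """Minutes of cooling a chilled-water tank can supply at constant load.

    Stored energy = volume * water density (1,000 kg/m^3)
                    * specific heat of water (4.19 kJ/kg-K)
                    * usable delta-T between supply and return.
    Mixing losses and pump heat are ignored in this sketch.
    """
    energy_kj = tank_m3 * 1000.0 * 4.19 * usable_delta_t_c
    seconds = energy_kj / load_kw  # kJ divided by kJ/s
    return seconds / 60.0

# Hypothetical example: a 50 m^3 tank with a 6 C usable delta-T
# carrying a 1,000 kW cooling load rides through for roughly 21 minutes,
# comfortably covering a chiller restart after a power transfer.
minutes = ride_through_minutes(50, 6, 1000)
```

The same arithmetic, run in reverse, sizes the tank for a target ride-through (for example, the time for generators to start and chillers to restart).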
CSE: How do data center projects differ by region, due to climate differences and other cooling factors?
Dickens: With the expansion of the environmental parameters and the additional risk assessment data provided in the 2011 ASHRAE Whitepaper on Datacom Facilities, the reality is that a vast swath of North America is conducive to free cooling. As the restrictions within the data center environment are lessened, the physical location of the data center becomes less important. That is part of the reason that more data centers are popping up in fly-over country and migrating from the more temperate West Coast. The biggest driver of design differentiation isn’t climate; it’s mission and market sectors.
Gillick: From an HVAC perspective, data centers differ by region due to climate differences. The recent trend in the industry has been to locate data centers in locations where they can leverage the climatic conditions to gain more hours per year of free cooling from air-side and water-side economizer systems—ideally cold, dry locations, including the Pacific Northwest east of the Cascades, as well as Canada, Iceland, and Sweden.
Kosik: Climate is just one of dozens of parameters that impact energy use in the data center. By also considering the cost of electricity and the fuel mix of the local power generation, a thorough analysis will provide a much more granular view of both environmental impacts and long-term energy costs. Without this analysis there is a risk of mismatching the cooling strategy to the local climate. True, there are certain cooling systems that show very little sensitivity in energy use to different climates; these are primarily ones that don’t use an economization cycle. The good news is that there are several cooling strategies that perform much better in some climates than in others, and some that perform well in many climates. Using energy modeling and analytics in the early planning phases of a project will provide important data on site-specific issues that will impact both long-term operational costs and facility first costs.
CSE: What advice do you have for engineers working in cooler climates with outside air systems?
Kosik: Use energy use simulation as a first step in judging the effectiveness of different cooling systems. Oftentimes, especially in colder climates, different cooling technologies start to converge on annual energy use. So a packaged direct expansion (DX) system with an indirect evaporative energy recovery system will perform very similarly to an air-cooled chiller system with evaporative coolers for pre-cooling of return water. The point is that cooling system capital costs can be lowered because more exotic cooling technology may not be needed to achieve similar energy use. Also, in colder climates it is not necessary to specify the use of ASHRAE classes beyond the “recommended” class. Based on the specific climate, the outdoor temperatures may never exceed the temperatures outlined in the A1–A4 classes, so with the exception of peak temperatures that occur infrequently, there is no need to specify higher ASHRAE classes.
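The first-pass climate screening Kosik recommends can be approximated, before any full energy model, with a simple bin count of the hours an air-side economizer could carry the load alone. The sketch below is a rough dry-bulb threshold model; the setpoint and approach values are assumptions for illustration, and a real study would use full psychrometric bin data including humidity limits:

```python
def free_cooling_fraction(hourly_drybulb_c, supply_setpoint_c=18.0,
                          approach_c=2.0):
    """Fraction of hours an air-side economizer alone could meet the load.

    Assumes outside air is usable whenever it is at least `approach_c`
    degrees below the supply-air setpoint. Dry-bulb only: humidity
    limits and evaporative assist are ignored in this sketch.
    """
    limit = supply_setpoint_c - approach_c
    usable = sum(1 for t in hourly_drybulb_c if t <= limit)
    return usable / len(hourly_drybulb_c)
```

Fed with the 8,760 hourly dry-bulb temperatures from a typical meteorological year file, the function returns the annual free-cooling fraction for one candidate setpoint; sweeping the setpoint across the ASHRAE recommended and allowable envelopes shows how quickly candidate sites and cooling strategies converge, as Kosik describes.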
Gillick: First, engineers should analyze climatic trends for the opportunity to maximize air- and water-side cooling and incorporate these sustainable technologies into their HVAC designs for data centers. They will find that the payback is very attractive in lifecycle terms. This is a “given” for the leading design engineers. Then, pack warm clothes. We are commissioning a data center 60 miles south of the Arctic Circle in Sweden and another in Canada. After their arrival, our teams sent e-mails to our home office in North Carolina asking if it was okay to expense the purchase of clothing for extreme cold weather.
Dickens: First, embrace the information shared and lessons already learned by the likes of ASHRAE TC9.9 and Facebook’s Open Compute Project. There is absolutely no reason to carry forward a legacy design—in fact, it’s practically malpractice at this stage. Second, realize that the definition of a “cool climate” is no longer intuitive; it’s a calculation based on risk, indoor environment, and outdoor ambient conditions. Based on eBay’s model, the company considers Las Vegas a cool climate. And last, the path of least resistance will always lead to a design of remarkable mediocrity. Every project undertaken is an opportunity to expand your comfort zone. So do it.