Data processing

Since the 21st-century business world relies heavily on information, data centers are among the most crucial components of any company. Here, engineers share advice on how to handle such important projects and keep their clients’ operations running.

By Consulting-Specifying Engineer January 25, 2012

Participants

  • Daniel Kennedy, data center product manager, Tate Access Floors Inc., Jessup, Md.
  • Bill Kosik, PE, CEM, BEMP, LEED AP BD+C, principal data center energy technologist, HP Enterprise Business, Technology Services, Chicago
  • James McEnteggart, PE, vice president, Primary Integration Solutions Inc., Charlotte, N.C.
  • Brian Rener, PE, LEED AP, platform leader, electrical, MW Group, Chicago
  • John Shea, senior vice president, The RJA Group, Chicago

CSE: What challenges does a data center project pose that are unique from other structures?

Daniel Kennedy: Data centers are highly energy-intensive spaces; a typical commercial office building may have a load of 10 W/sq-ft, whereas a data center can easily exceed 200 W/sq-ft. Efficiently removing this heat load requires targeted, precise airflow delivery to the point of load. Unlike a typical commercial office space, the overall space temperature in a data center isn’t of great concern; in the end, only the temperature delivered directly to the IT equipment matters.
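For a rough sense of what that density difference means for airflow, here is a minimal back-of-the-envelope sketch using the standard sensible-heat relation (Q in BTU/hr ≈ 1.08 × CFM × ΔT in °F). The 1,000-sq-ft area and 20 °F supply/return temperature difference are assumptions chosen for illustration, not figures from the discussion.

```python
# Back-of-the-envelope airflow estimate; area and delta-T are illustrative assumptions.
BTU_PER_HR_PER_KW = 3412.0  # 1 kW of electrical load dissipates ~3,412 BTU/hr of heat


def required_cfm(heat_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Supply airflow (CFM) needed to remove heat_load_kw at the given supply/return delta-T."""
    return heat_load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)


area_sq_ft = 1000.0
office_kw = 10.0 / 1000.0 * area_sq_ft        # 10 W/sq-ft  -> 10 kW
data_center_kw = 200.0 / 1000.0 * area_sq_ft  # 200 W/sq-ft -> 200 kW

print(f"Office, 1,000 sq ft:      {required_cfm(office_kw):>8,.0f} CFM")
print(f"Data center, 1,000 sq ft: {required_cfm(data_center_kw):>8,.0f} CFM")  # ~20x the air
```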

James McEnteggart: A typical challenge is the perception that the data center is a commercial building; most projects are more like industrial facilities than office buildings. The trend toward more fully integrated monitoring and controls has introduced significant inter-system coordination issues that must be handled by the design team or a separate specialty consultant.

Brian Rener: Data centers are unique due to the extreme density of power and heat. In the next few years, new cooling and building concepts will continue to redefine what a data center looks like. Many data centers also rely on significant amounts of generator backup power, and those systems face increasingly challenging environmental restrictions.

John Shea: Higher airflows are required to support the increased size and cooling demands of the newer enterprise-class data centers. The “uptime” demands of data centers also set them apart from most other commercial applications.

CSE: What do you see changing in data centers in the next 3 to 5 years?

Bill Kosik: The biggest change we will see will relate to the IT equipment and the integration of the IT and power/cooling systems. IT equipment will operate at much higher ambient temperatures, and we will see close-coupled cooling, where the processor, graphics card, and memory reject heat directly to water or some other medium. There has also been much talk of “power aware” IT equipment, meaning IT equipment that can shift workload to different machines based on a number of parameters, including temperature and available power. This strategy will continue to evolve until there is a very close communication link between the IT equipment and the power/cooling systems.

McEnteggart: I predict an increased focus on energy efficiency. A greater number of projects are incorporating multiple energy efficiency strategies and fully capitalizing on the broader environmental conditions permitted by the ASHRAE TC 9.9 guidelines. In addition, greater acceptance of packaged solutions and modular systems by all data center owners will occur.

Kennedy: I see a shift toward a more variable load environment, where the computing resources of the data center are pooled for private cloud computing systems. This will result in active load shedding, putting servers into standby when their resources are not required and bringing them back online when they are. The result is a highly variable IT load profile; today’s typical data center is only beginning to see the variable nature of its IT equipment come into play.

Rener: There are several evolving trends led by the need for improved energy efficiency, currently measured as power usage effectiveness (PUE). Other trends involve free cooling through air- and water-side economization. Modularity and containerization provide the flexibility to adapt to rapidly changing technologies and varying reliability requirements, and then, of course, there is cloud computing. Current growth trends in the high-performance computing marketplace will find their way down into cloud computing environments and will revolutionize data as we know it today. Electrical challenges over the next 3 to 5 years include power distribution schemes that support ever-increasing power demands with ultimate flexibility. Cooling challenges will drive a redesign of the data center that mixes air and water to achieve maximum overall efficiency.

CSE: Please describe a recent project you’ve worked on—share problems you’ve encountered, how you’ve solved them, and engineering aspects of the project you’re especially proud of.

Kennedy: We recently had a customer, a typical large colocation provider, who struggles to keep blanking panels installed in its customers’ IT racks. The facilities side of the data center understands the importance of using blanking panels to reduce bypass air in its cooling systems. That need was often incompatible with the end users’ desire for an easy-to-service IT rack, and blanking panels would often be removed and never reinstalled. Our company has provided directional airflow panels in this space with a standard opposed-blade damper to balance the airflow from each panel to match the IT rack’s airflow consumption requirements. We determined it would be possible to build a new damper, one with individual zones that could deliver airflow specifically to vertical zones of the IT rack. We found that most of the customer’s customers left the upper third of the rack empty or used it for networking gear, and in this area blanking panels rarely stayed installed. The new MultiZone damper allowed the facilities group to deliver air to the IT hardware that was installed and turn off airflow delivery to the zones where no blanking panels were installed. This allowed a dramatic reduction in airflow delivery to the non-blanked sections of the rack, reducing the bypass air almost entirely.

Rener: Increased power demands have led us to use busway for distribution. In turn, as these busways have increased in amperage and voltage, we are encountering increased fault levels. This has resulted in two issues: the manufacturers of in-rack power distribution systems are being challenged to provide short-circuit ratings that match those of the busways, and data center operators are being challenged to address arc flash requirements.

McEnteggart: Integration of monitoring systems is always a challenge. On a recent project we worked with all members of the project team to map out the integration of 20 systems, each with its own onboard controls, with three separate automation systems for monitoring and reporting. This coordination was not identified as a requirement during bidding, but all members of the team stepped up to coordinate during early construction and in the field during installation and start-up.

CSE: What recommendations would you offer other engineers in maximizing the effectiveness of their data center projects?

Rener: Technology is changing all the time. Data center designs should be modular, flexible, scalable, and efficient.

McEnteggart: The use of basic lifecycle cost analysis, which includes energy and maintenance costs, is a great way to make system selection choices during early stage design. It helps balance the opposing forces of initial cost versus reliability and maintainability. Involving system integrators and commissioning teams early in the design can ensure the final design has simple operating strategies that are energy efficient and easy to maintain.

Kennedy: Air bypass is key to improving data center efficiency, regardless of cooling methodology. If the airflow paths are allowed to mix, either the IT equipment will receive air above the design temperature, or cold air will be bypassed directly back to the cooling systems. Engineers should consider the reality of the systems they are designing and use systems that will remain efficient even in the hands of users who may not understand the implications of the changes they make. Complex systems that require long-term management to remain efficient are likely to become less so over time due to lack of maintenance. The more automation and monitoring that can be built into a system on day one, the more efficient the system is likely to remain over time.

CSE: The U.S. Dept. of Energy (DOE) has launched an initiative to help increase the energy efficiency of data centers. Why is this a concern, and how have you dealt with it in your work?

Kennedy: The efficiency goals of our customers in the data center world have prompted our firm to take an entirely new look at the products we offer into the marketplace. These new developments have been focused on increasing the efficiency of airflow delivery from the raised floor, to the IT load, and then back to the air-handling equipment.

McEnteggart: At the current rate of growth, data centers are anticipated to consume an increasing percentage of the nation’s electrical power in the future. If we can rein that in through intelligent design, it helps everyone. The DOE has provided several tools to help educate engineers and clients on the need for energy-efficient data centers and has given guidelines on proven solutions for increasing energy savings. Adding these features and solutions during design is low cost and provides ongoing benefits to the data center operator.

Rener: With the increasing demand for data and the power consumed by data equipment, energy efficiency is critical. PUE has been the key performance indicator for this. The DOE initiative and website are great sources of tools, resources, and programs for increasing energy efficiency. We are commonly asked to target a PUE of less than 1.3 for most of our new data center projects. We have achieved much better efficiencies than this through higher operating temperatures, free cooling (non-refrigeration-based cooling), hot and cold aisle containment, eco-mode UPS, and 400 V power distribution, among other solutions.
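For reference, PUE is simply the ratio of total facility energy to IT equipment energy, so a 1.3 target means no more than 30% overhead on top of the IT load. A minimal sketch of the calculation, using made-up annual energy figures rather than numbers from any of these projects:

```python
# Minimal PUE calculation; the kWh figures below are hypothetical, not project data.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the theoretical floor)."""
    return total_facility_kwh / it_equipment_kwh


it_kwh = 8_760_000        # assume an average 1 MW IT load running for one year
overhead_kwh = 2_200_000  # assumed cooling, UPS/distribution losses, lighting, etc.

print(f"PUE = {pue(it_kwh + overhead_kwh, it_kwh):.2f}")  # ~1.25, under the 1.3 target
```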

CSE: What factors do you need to take into account when designing automation and controls for a data center?

McEnteggart: Reliability and cost. Many data center owners have used PLC-based controls in the past because they offer excellent reliability and simple solutions for providing hot-standby capability. The downside is that these systems carry a premium to supply and install. However, in the past few years DDC-based control systems have made significant improvements in reliability and standby capabilities while providing significant savings.

Rener: You need to factor in electrical power monitoring systems, both for monitoring power quality and power events and for submetering and billing, as well as temperature monitoring and power control schemes for events like thermal runaway.

Kennedy: The biggest concern is to ensure that when the automation or control system fails, it fails in a safe condition that will not impact the overall reliability of the data center space. When our firm launched a line of variable air volume dampers for use in the data center, we ensured that all units would fail to the safe position on input, controller, or power failure. Automation and control systems can and will fail, especially custom systems designed for specific data centers. Commissioning will find many of these failures, but designing a system that defaults to the maximum cooling or power supply configuration will result in a higher level of reliability.
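The fail-safe principle Kennedy describes can be illustrated with a short hypothetical control sketch; the function names, setpoints, and gains below are invented for illustration and do not represent any vendor’s product.

```python
# Hypothetical sketch of "fail to the safe (maximum cooling) position" control logic.
FULL_OPEN = 100.0  # damper position (%) that delivers maximum airflow


def damper_command(inlet_temp_c, setpoint_c: float = 24.0, gain: float = 10.0) -> float:
    """Return a damper position in percent; on any bad or missing input, fail open."""
    if inlet_temp_c is None:               # lost sensor, controller, or network input
        return FULL_OPEN
    try:
        error = float(inlet_temp_c) - setpoint_c
        return max(0.0, min(FULL_OPEN, 50.0 + gain * error))  # simple proportional loop
    except (TypeError, ValueError):        # garbage data: still fail toward full cooling
        return FULL_OPEN


print(damper_command(26.0))  # 70.0  -> modulating normally
print(damper_command(None))  # 100.0 -> failed input, defaults to maximum cooling
```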

CSE: How have changing HVAC and electrical/power codes and standards affected your work on data centers?

Rener: The most significant changes that will impact the market in the near future have come from ASHRAE by opening the temperature and humidity windows through the use of critical equipment data, thereby relaxing the allowable space temperatures over an acceptable duration. This allows owners to increase efficiency through acceptable risk management.

CSE: Which codes/standards prove to be most challenging in data center work?

Rener: As we look to increased temperatures in both the data center and the support spaces, we also run into issues with the operating limits and derating of electrical power equipment per the NEC. OSHA has yet to specifically chime in on this issue but could if the hot aisles become too hot.

CSE: What’s the most important factor to keep in mind when wrestling with codes/standards issues in a data center?

Rener: What a code requires or a standard allows needs to be reviewed with the data center operations and IT folks. Higher operating temperatures need to be reviewed with the vendors of the data equipment, and also with operators who may need to work for extended periods in a hotter environment. Another example: the NEC describes specific requirements for emergency power off (EPO) switches, but data center operators (and local officials) may have very different reasons or uses for an EPO system.

CSE: What experience have you had with the Green Grid Association’s report, “Recommendations for Measuring and Reporting Overall Data Center Efficiency: Version 2 – Measuring Power Usage Effectiveness at Data Centers”?

Rener: We are implementing this now in a number of our projects. The new PUE document establishes categories of PUE metrics, from PUE0 to PUE3, based on the point of measurement within the systems and whether the measurement is peak or cumulative. The document also seeks to align with the specific PUE calculations in other standards such as U.S. Green Building Council LEED and Energy Star. The specification and design of the electrical power monitoring system is key to the PUE category you can achieve. We are recommending meters as close to the IT load as possible, not only to accurately track IT energy consumption but also to monitor power quality and allow for possible colocation submetering.
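To illustrate why the meter location matters, here is a small hypothetical example. The annual energy figures are invented, but they show how metering closer to the IT load counts more of the distribution losses as overhead and therefore reports a more conservative PUE.

```python
# Illustrative only: reported PUE vs. where IT energy is metered. All kWh values are made up.
total_facility_kwh = 12_000_000

it_energy_by_meter_point = {
    "UPS output":         9_600_000,  # counts downstream PDU and in-rack losses as "IT"
    "PDU output":         9_300_000,  # excludes PDU/transformer losses
    "IT equipment input": 9_100_000,  # excludes in-rack distribution losses as well
}

for point, it_kwh in it_energy_by_meter_point.items():
    print(f"{point:<20} PUE = {total_facility_kwh / it_kwh:.2f}")
# UPS output           PUE = 1.25
# PDU output           PUE = 1.29
# IT equipment input   PUE = 1.32
```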

CSE: How have sustainability and PUE requirements affected how you approach electrical/power systems in data centers?

Rener: We are frequently looking at 400 V distribution to avoid transformer losses, working with the mechanical engineers to operate our distribution systems at higher temperatures, using medium-voltage power generators, and looking at eco-mode and other efficient UPS designs as well as LED-based lighting solutions.

Kosik: From an energy modeling perspective, the UPS and electrical distribution system have a huge impact on the overall energy use in the data center. This is especially important when data centers are lightly loaded. Depending on the reliability topology, there could be a loss of 15% or even more. That is a big deal. In new data centers with state-of-the-art UPS equipment and electrical distribution, the loss at full load could be as low as 3%.
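A quick back-of-the-envelope calculation shows why those electrical losses matter so much to PUE. The 1 MW IT load and 250 kW cooling load below are assumptions chosen for illustration, not figures from Kosik’s models.

```python
# Assumed loads: how UPS/distribution losses flow straight into PUE.
def pue_with_electrical_losses(it_kw: float, cooling_kw: float, loss_fraction: float) -> float:
    """PUE given electrical losses expressed as a fraction of the IT load."""
    electrical_loss_kw = it_kw * loss_fraction
    return (it_kw + cooling_kw + electrical_loss_kw) / it_kw


it_kw, cooling_kw = 1000.0, 250.0  # hypothetical 1 MW IT load with 250 kW of cooling
print(f"15% electrical losses: PUE = {pue_with_electrical_losses(it_kw, cooling_kw, 0.15):.2f}")  # 1.40
print(f" 3% electrical losses: PUE = {pue_with_electrical_losses(it_kw, cooling_kw, 0.03):.2f}")  # 1.28
```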

CSE: What trends and events have affected changes in fire detection/suppression systems in data centers?

Shea: Since 2001, the explosive growth of the Internet, social media, and most recently “cloud” data storage has driven rapid growth in mid-tier and enterprise-class data centers. These very large facilities also have a much higher power load and density and a much higher cooling demand than data centers of the past. This has created challenges in both the detection of fires and the available suppression options.

Rener: Pre-action water (dry pipe) has become the predominant suppression system, but the use of hot and cold aisle containment has required tighter coordination of head locations and densities.

McEnteggart: Many of my recent projects have opted to employ simple double-action/pre-action systems for fire protection, rather than clean agent systems. This is primarily an economic decision since the data center projects are growing in physical size and the volume of clean agent required would be a significant expense.

CSE: How have the costs and complexity of fire protection systems changed in recent years?

McEnteggart: The implementation of hot or cold aisle containment strategies can affect sprinkler layout as well as clean agent requirements. These requirements must be factored into the design of the containment solution, as well as the economic analysis used for determining if the containment solution is financially advisable.

Shea: This market has seen two primary drivers impacting the cost and complexity of fire protection systems. The first is that the size and design of enterprise-class facilities create a design challenge for the detection systems. The trend in these systems is now toward very early warning detection, primarily air sampling systems, which respond faster in high-airflow environments and offer more flexibility in their design and installation in these areas. They are, however, substantially more expensive than standard spot-type detection. The second driver is that the cost of gaseous suppression systems has increased dramatically since the 1990s and, with the extremely large enterprise-class data centers, has become cost prohibitive. In many cases this has resulted in owners opting for pre-action sprinkler protection, which had historically been a less desirable option in the electronic environment.

Rener: Facilities have moved away from gaseous systems and toward pre-action. This change is not only less expensive but better for the environment. Mist and clean agent systems are available but carry a heftier price tag than water systems. Other than the need for additional sprinkler coverage within hot or cold aisle containment systems, fire protection has remained about the same.

CSE: What changes in clean agent suppression systems have you seen in data centers recently? What do you see changing in the near future?

Shea: In the past 10 years several gaseous options have emerged; however, each comes at a much higher agent cost than Halon, the historic option, and also requires much more agent to protect the same area. The trend appears to be that smaller, support-type data centers will continue to use some type of gaseous agent, while the larger mid-tier and enterprise-class data centers will not support the cost of these systems and will rely on pre-action sprinklers.

Rener: The use of clean agent systems, except in certain packaged electrical enclosures, has decreased significantly. Pre-action dry pipe water systems seem to be more widely used.

CSE: What are some important factors to consider when designing a fire and life safety system in a mixed-use building? What things often get overlooked?

Shea: In many cases, systems in a mixed-use building will serve smaller data center types such as a server room or a “localized” data center. These are typically less complicated than the larger enterprise-class centers; however, they still have challenges. These range from the design of gaseous suppression systems and the integrity or “tightness” of the room needed to contain the suppression agent, to the separation of airflow between the data center and the rest of the facility.

McEnteggart: The interaction of the fire protection in the data center spaces and the remainder of the building must be carefully coordinated to ensure that an event in the balance of the building does not affect the operation of the data center.

Rener: Fire separation (rated wall assemblies) and zoning require careful consideration in mixed-use environments. Routing of wet systems (wet sprinklers, drains) that might be located in an office area above a data center, data and electrical equipment move-in and move-out paths (both space and weight considerations), security zones, and the location of batteries or fuel storage are just some of the concerns in a mixed-use building with a data center.

CSE: Energy efficiency and sustainability are typically the No. 1 issue engineers face when designing data centers. What has been your experience in this area?

Kosik: On almost all of our requests for proposals and project design briefs there is a call for energy efficiency. Water efficiency comes up frequently as well. Broader sustainability issues are also in the mix, either explicitly or implicitly, based on the client’s goals. This is not only true in the U.S. In other countries there are also directives to reduce energy consumption in the power and cooling systems. For example, we are working on projects in countries such as Spain, Brazil, Hungary, the Russian Federation, Canada, Norway, India, China, and the Czech Republic, among many others; all have project requirements for energy efficiency.

McEnteggart: The desire for energy efficiency has risen to the same level of importance as reliability for most owners, forcing the engineering community to develop new approaches to data center design. Much of the change has been in the cooling of data centers.

Kennedy: Customers now have a focus, set at the higher levels of their organizations, on ensuring that their new data centers, and even existing data centers, are operating or will operate at peak efficiency. This focus has resulted in the development of new products to more efficiently deliver air into the IT space. These efficiencies are typically gained with more efficient airflow delivery systems and with controls for metering and regulating airflow delivery into the space based on demand. This more sustainable and efficient approach to airflow delivery has caused us to focus product development on these types of products for the data center.

Rener: Energy efficiency is the easiest key performance indicator to mention these days. It has almost replaced tier levels as the most commonly used metric. However, we are also seeing the need for total cost of ownership (TCO) analysis that accounts for flexibility, modularity, and scalability. Traditionally, owner-operators were pushed to focus on first cost, but with the increased life expectancy of the buildings, that has increasingly been replaced by TCO modeling. Beyond energy-efficiency initiatives there are other sustainability concerns, with metrics like water and carbon usage effectiveness. In many parts of the world these metrics are being used to support water and carbon credit initiatives that could negatively impact the data center industry. Today’s power is mostly derived from carbon-based fuels, and water is the next commodity of concern; short supply will drive up its price, further impacting the industry. Proper planning in today’s data centers to minimize the use of water while searching for ways to focus on renewables may be the only chance we have to answer this concern before it becomes a problem.

CSE: With changing awareness of sustainability issues and increased number of products, has working on sustainable data centers become easier—or more challenging?

McEnteggart: Client awareness has made it easier for design professionals to work with clients on implementing sustainable and energy-efficient strategies.

Rener: Beyond PUE, LEED and Energy Star are now means of quantifying sustainability. Energy Star has been more out front on this; unfortunately, the U.S. Green Building Council has been slower to push out new guidelines for data centers, making LEED certification more challenging and open to interpretation.

Kennedy: I think it has become easier. The availability of many approaches to data center efficiency has created a broad market that offers many solutions, and the market is actively choosing which ones succeed and which fail based on the hard reality of actual use. This focus, now nearly 5 years old, has seen a large number of design ideas come and go, or come and only capture a relatively small part of the data center market. The availability of niche solutions means that the data center can be more flexible and meet many demands without an all-or-nothing approach.

CSE: How does the age of a structure affect your ability to retrofit or retro-commission features in data centers?

Rener: Age is not always the issue; rather, the key issues we see for retrofitting are structure capacity, clear heights, site proximity to power and IT services, and zoning issues for the backup power systems. The key for rapid retro-commissioning is good existing record drawings and operating and maintenance data.

Kennedy: Older data center spaces do offer challenges, but fortunately, given the scope of the market, many solutions have been designed to work in these spaces. Our company specifically found that many existing data center spaces had low raised-floor heights and, due to slab-to-slab restrictions, could not increase the floor height. We designed, and have been providing, shallow, high-volume airflow fan-assist devices to be installed below directional airflow panels to boost airflow into areas that need it while overcoming the restrictions typically found in existing data center spaces.

McEnteggart: Retrofitting energy conservation systems into an existing data center regardless of age is a challenge due to the need for space for ductwork, piping, and other infrastructure. In existing facilities this space is difficult to come by.

CSE: What unique requirements do data center HVAC systems have that you wouldn’t encounter on other structures?

McEnteggart: The need to move large volumes of heat at all times demands ventilation or air conditioning systems that are 10 to 20 times larger than an office building of comparable size would require. This necessitates HVAC systems that efficiently move large volumes of air and water to keep the computers operating.

Kennedy: They are becoming more similar. The use of VFD-driven or electronically commutated (EC) fans in commercial office environments has been the norm for 20 years or more. Data centers are now headed that way, gaining the ability to variably deliver air to the space. These new fan systems, coupled with variable air volume dampers at the point of supply (the IT racks) and static pressure control, just like the commercial office environment, result in a highly efficient data center cooling system.

Rener: High-density internal heat loads, possible reuse of rejected heat into other uses, increasing use of water-based (rather than air) cooling to spaces and equipment, extensive use of outside air (free cooling).

CSE: How do data center projects differ by region, due to climate differences and other cooling factors?

McEnteggart: Many companies that are not constrained to locate a data center close to a particular site are siting data centers in geographic regions with cool, dry climates to support free and evaporative cooling, lowering the PUE of the facility.

Kosik: This is an area I have spent a great deal of time on, analyzing and reporting energy consumption of data centers by geography. If you take identical data centers and put one in Budapest and one in Sydney, there will be big differences in energy consumption, not because of heat transfer across the envelope (walls, windows, roof) but because of the cooling system, specifically the heat rejection and economization strategies. Based on the research I have done in this area, using essentially the same data center with the cooling system and climate as the variables, the PUE values range from 1.25 all the way up to 1.79. This is why we need to spend a great deal of time at the beginning of a project advising our clients on these types of issues, since they have such a significant long-term impact.

Rener: With increasing emphasis on free cooling for the HVAC system, a cooler climate definitely helps. Of course humidity also plays a role, not just temperature.

CSE: What advice do you have for engineers working in cooler climates with outside air systems?

Kennedy: The biggest consideration is what you do when the outside air quality exceeds the ability of the filtration system. Backup systems are a must if the site is intended to be highly reliable. I recently attended a presentation on Facebook’s data center in Prineville, Ore. They clearly stated that if the outside air becomes polluted, such as from a forest fire or chemical spill, the data center will be down in 30 min. It’s not a big deal for a social media site, but for banks and other critical systems, these failures would be unacceptable and require a backup system. Basically, consider your reliability target, and ensure you can hit it in the most efficient manner possible.

McEnteggart: Most data centers are designed without heating systems since the IT equipment generates enough heat to keep the facility in the proper operating range. However, in many facilities the population of the data center with IT equipment occurs over months or years. Make provisions for heat for the first year or two of operation; the heating system should be a low first-cost solution because its operating life will only need to be a few years.

Rener: Look at humidification issues and requirements of the IT equipment; examine how water cooling systems will operate during freezing conditions. Focus on rapid changes in temperature through the early morning hours and the overall circulation of the space.

Kosik: The way to go is to use indirect systems, such as indirect air and indirect evaporative cooling. In a cool climate you get a lot out of the cooler temperatures using heat exchangers, and you don’t have to worry about the potential complications of using direct outside air for cooling, including particulates and humidification/dehumidification. Granted, there is an efficiency loss compared to direct outside air, but it is minimal and these systems offer some really great control opportunities to maintain the required data center conditions. Depending on the specific project, we are mostly recommending indirect systems on our new data centers.

CSE: Some theories indicate that future data centers won’t need lighting or HVAC. What changes/developments do you think are in the future?

McEnteggart: The facilities will always need light as equipment will need to be serviced at night at some point; however, I suppose this could be accomplished with portable lanterns or work lights. Supplemental ventilation will always be required and is actually desirable from an energy-efficiency perspective because large fans are more energy efficient than the small fans found in servers. But the “data center of the future” raises interesting questions and challenges to designers of data centers.

Rener: We have been working on projects that use minimal or no lighting and rely on nearly 100% “free cooling,” such as simply using city water in an evaporative (adiabatic) cooling scheme. Beyond no lighting or no HVAC, the question now being asked, given outdoor containerized solutions, is “do we even need a building at all?” Containerized and cloud environments will reduce the need to enter the space for traditional preventive maintenance. The space will be designed to operate until failure and then be taken down for service.