Data centers’ intricate design

Data centers are important structures that hold vital information for businesses, schools, public agencies, and private individuals. If these mission critical facilities aren’t properly designed and equipped, the gear inside and the data the servers handle are at risk.

By Consulting-Specifying Engineer April 27, 2016

Respondents

Tim Chadwick, PE, LEED AP, President, AlfaTech Consulting Engineers, San Jose, Calif.

Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture & Engineering, Albany, N.Y.

Barton Hogge, PE, ATD, LEED AP, Principal, Affiliated Engineers Inc., Chapel Hill, N.C.

Bill Kosik, PE, CEM, LEED AP, BEMP, Building Energy Technologist, Chicago

Keith Lane, PE, RCDD, NTS, RTPM, LC, LEED AP BD+C, President/Chief Engineer, Lane Coburn & Associates LLC, Seattle

Robert Sty, PE, SCPM, LEED AP, Principal, Technologies Studio Leader, SmithGroupJJR, Phoenix

Debra Vieira, PE, ATD, LEED AP, Senior Electrical Engineer, CH2M, Portland, Ore.


CSE: What’s the No. 1 trend you see today in data center design?

Tim Chadwick: Scalability would be the top trend we have been seeing for the past 3 or more years, and it continues today. The challenge in data center design is creating a facility meant to last 15 to 20 years when the technologies inside it will be refreshed or changed out every 3 to 5 years. We are guessing at what server and storage technologies will look like 4 to 5 years into the future. Predicting even the next generation is difficult, so looking that far out means building a facility that can adjust on the fly and handle a wide variety of changes in technology and in company growth or expansion.

Barton Hogge: The No. 1 trend is removing as many infrastructure dependencies as possible and treating water as a critical utility, taken as seriously as backup power; clients are requesting designs that have little or no dependency on water usage. Smaller-scale, lower-density sites can achieve this with reasonable ease, while sites with high-density and high-performance computing (HPC) applications continue to rely on the efficiency of water as a heat-rejection medium but are investing more often in local storage systems.

Bill Kosik: Cloud computing has really reshaped how data centers traditionally were realized. In most instances, cloud computing moves the computing power out of the customer’s facilities and into the cloud provider’s facilities. However, sensitive business-critical applications will typically remain in the customer’s facilities. In certain circumstances, the power and cooling demands in a customer’s data center could be reduced or backfilled by other types of computing requirements. Conversely, cloud providers’ data centers are growing in power and cooling demand. This requires strategic decisions on how compute, storage, and networking systems should behave under highly varying loads; there cannot be any negative impact on business outcomes, and the systems must operate in a highly energy- and cost-efficient manner.

Keith Lane: I see energy efficiency as the No. 1 trend. On the electrical side, we are seeing more efficient uninterruptible power supply (UPS) systems, 400/230 V system transformers, and topologies that allow for more efficient loading of the electrical components. On the mechanical side, we are seeing increased cold-aisle temperatures, increased delta T, outside-air economizers, and hot-aisle containment. On the information technology (IT) side, the 230 V electrical systems also increase the efficiency of the servers. UPS battery technology is also improving. We are seeing absorbed-glass-mat and pure-lead batteries as well as advances in battery-monitoring systems.
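As a rough, hypothetical illustration of why more-efficient UPS systems matter, the sketch below compares annual UPS losses at two efficiency points for a constant IT load. The load and efficiency figures are assumed for illustration and are not drawn from the article.

```python
# Hypothetical back-of-the-envelope comparison of annual UPS losses at two
# efficiency points; the load and efficiencies below are assumed values.

HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(it_load_kw: float, ups_efficiency: float) -> float:
    """Energy lost in the UPS per year for a constant IT load."""
    input_kw = it_load_kw / ups_efficiency  # power drawn ahead of the UPS
    return (input_kw - it_load_kw) * HOURS_PER_YEAR

legacy = annual_ups_loss_kwh(it_load_kw=1000, ups_efficiency=0.94)
modern = annual_ups_loss_kwh(it_load_kw=1000, ups_efficiency=0.97)
print(f"Legacy UPS loss: {legacy:,.0f} kWh/yr")
print(f"Modern UPS loss: {modern:,.0f} kWh/yr")
print(f"Savings:         {legacy - modern:,.0f} kWh/yr")
```

Even a few points of UPS efficiency translate into hundreds of megawatt-hours per year at megawatt-scale loads, which is why topology and loading choices get so much attention.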

Robert Sty: I would say one of the latest design trends is the reduction in UPS battery storage. With standby generator technology allowing faster start-up and sync times (in many instances, less than 20 seconds), data center managers are far more comfortable moving from 15-minute battery storage to a few minutes, or even embracing flywheel technologies.
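To show why faster generator start and sync times shrink the stored-energy requirement, here is a simple ride-through arithmetic sketch. The critical load and design margin are assumed values, not figures from any of the projects discussed.

```python
# Illustrative sizing sketch (assumed numbers): usable battery energy needed
# to ride through until standby generators pick up the load.

def ride_through_energy_kwh(critical_load_kw: float, ride_through_min: float,
                            design_margin: float = 1.25) -> float:
    """Approximate usable battery energy for a given ride-through time."""
    return critical_load_kw * (ride_through_min / 60.0) * design_margin

load_kw = 2000  # hypothetical critical load
for minutes in (15, 5, 2):
    energy = ride_through_energy_kwh(load_kw, minutes)
    print(f"{minutes:>2} min ride-through: about {energy:,.0f} kWh usable")
```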

CSE: What trends should engineers be aware of for data centers or data closets in mixed-use buildings?

Robert C. Eichelman: In a mixed-use building, there are inherent risks to a data center that aren’t present in a dedicated data center facility. Steps must be taken to minimize these risks and to ensure that tenants, and the systems that support them, have the least possible impact on critical operations. To this end, all electrical and mechanical infrastructure that is required to maintain power and cooling to critical IT equipment should be dedicated to the data center. Ideally, this would include dedicated electrical services, generators, chiller plants, fuel-oil systems, and all related downstream distribution and equipment. Equipment should be located in dedicated spaces that are accessible only to authorized data center personnel.

In cases where separate electrical services are not practical, steps should be taken to ensure that faults on the tenant system do not affect the data center. This should be considered in the overcurrent protective device coordination study for the facility. Utilities that serve other tenants or the building as a whole (such as distribution piping, sanitary and roof drains, fire protection piping, electrical feeders and branch circuits, and telecommunications cabling systems) should never pass through the computer room or data center support spaces. The floor slab above the data center should be completely sealed, without any penetrations, to ensure that water does not migrate into the data center if a flooding condition were to occur on an upper floor. Security measures, beyond those that are common for a data center, should be considered at the common entrance to the facility; this could include personnel and vehicle screening, access controls, intrusion detection, and video surveillance systems. A dedicated building for a data center is always preferred.

Sty: For many of our commercial-enterprise clients, the headquarters buildings contain a main distribution facility (MDF), an intermediate distribution facility (IDF), and other server rooms that have uptime requirements similar to those of their main enterprise data centers. These requirements can drive additional mechanical, electrical, and plumbing (MEP) infrastructure for the base buildings that was not in the original program of requirements for the space or part of the original core-and-shell infrastructure. Our mission critical team works very closely with our office/workplace/interiors teams to provide space planning for the additional mechanical and electrical equipment and to ensure that the supporting infrastructure meets the level of uptime required by the end user.

Lane: Increased power densities and modularity of the systems. Over the years, we have seen the average kilowatt per rack increase from 1 kW/rack to more than 10 kW/rack. We are seeing much more than 10 kW/rack in some higher-density areas within the data center. Coordinating the electrical and mechanical systems as well as both the UPS battery type and code-required battery electrolyte containment/ventilation within small data closets with space limitations is critical.
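The coordination point above can be made concrete with the standard sensible-heat relation, CFM = q (BTU/hr) / (1.08 × ΔT °F). The sketch below uses assumed rack densities and delta T values to show how quickly required airflow grows as densities climb.

```python
# Quick check (assumed values) of how rack density and air-side delta T drive
# required airflow, using the standard sensible-heat relation:
#   CFM = q_btuh / (1.08 * delta_T_F)

BTUH_PER_KW = 3412  # conversion, kW to BTU/hr

def rack_airflow_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Approximate airflow needed to remove a rack's sensible heat."""
    return rack_kw * BTUH_PER_KW / (1.08 * delta_t_f)

for rack_kw in (1, 10, 20):
    for dt in (20, 30):
        cfm = rack_airflow_cfm(rack_kw, dt)
        print(f"{rack_kw:>2} kW rack at {dt} F delta T: {cfm:,.0f} cfm")
```

A 10-kW rack at a 20°F delta T needs roughly 1,600 cfm, which is one reason higher delta T and containment strategies are so valuable in tight spaces.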

Chadwick: Current generations of server or IT storage equipment can handle higher inlet-air temperatures and humidity than typical occupied spaces. These higher inlet temperatures mean that a combined HVAC system serving offices and data centers cannot be optimized for both needs. To achieve the optimum efficiencies, separate HVAC systems are needed. This has always been true, but now the higher temperatures have opened up new cooling technologies and new economizer strategies to further enhance efficiencies. The typical direct-expansion cooled computer room air conditioning (CRAC) unit is no longer the best solution in most applications.

CSE: Describe a modular data center you’ve worked on recently, including any unique challenges and their solutions.

Debra Vieira: For this project, the client required a combination of traditional raised-floor data center space with the ability to add eight modular data centers (MDCs) in the future for quick deployment of IT equipment. The data center incorporated a façade of plate-metal screens with a pattern of openings that creates a moiré effect. To keep costs down, the MDCs were designed for outdoor installation; however, they had to integrate with the architectural form of the data center. To conceal the MDCs, we designed a 2-story structure that followed the form of the data center while allowing airflow to and shading of the MDCs. However, the structure was not enclosed, so it did not prevent rain and dust from penetrating. Within the structure, we stacked the MDCs and provided redundant power and cooling from the main data center, which housed the UPSs and generation systems.

Hogge: We recently completed a greenfield prefabricated modular data center for a technical college. The facility was designed to support 90 kW of IT load across nine cabinets. The facility included in-row cooling units and hot-aisle containment for a cooling system and a modular UPS system with N+1 modules. The facility also is supported by an adjacent pad-mounted generator. The project included the bidding of the facility and the associated site work. Developing a performance bidding document that would allow multiple vendors to propose on a significant variety of approaches to meeting the project requirements—and allow flexibility for modification after installation at the site—was a challenge.
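For the load described (90 kW across nine cabinets), N+1 module sizing is simple arithmetic: enough modules to carry the load, plus one redundant module. The module rating below is assumed for illustration; the article does not state the actual rating used.

```python
# Hedged illustration of N+1 UPS module counting; the 25-kW module rating
# is an assumption, not a figure from the project.

import math

def modules_required(it_load_kw: float, module_kw: float) -> int:
    """Modules needed to carry the load (N) plus one redundant module (+1)."""
    n = math.ceil(it_load_kw / module_kw)
    return n + 1

print(modules_required(it_load_kw=90, module_kw=25))  # 4 to carry the load + 1 spare = 5
```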

Chadwick: All of our current projects for enterprise or co-location data center clients include modularity in some way. In some cases, modular mechanical and electrical systems (or skids) are included. In other cases, we are seeing modular construction of the data center racks, rows, power, and IT distribution. Finally, modularity could be achieved with the whole data center built in containers or other modular structures that allow for rapid deployment and relocations.

Lane: We are implementing a lot of modularity into data centers. This includes modularity of the electrical components within a brick-and-mortar data center as well as individual modular data center pods. Both types of systems have their place in the modern data center environment. UPS systems can be integrated with plug-and-play modularity on both the inverter/rectifier and the battery sides. Additionally, capacity and/or redundancy can be built into a well-engineered electrical distribution system to reduce upfront cost while allowing for future expansion. Modularity can also decrease initial power-usage effectiveness (PUE), resulting in greater energy efficiency throughout the life of the data center.
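PUE is total facility power divided by IT power, so a right-sized modular deployment avoids carrying the overhead of a fully built-out plant at light load. The numbers in the sketch below are assumed purely to illustrate the part-load effect.

```python
# PUE = total facility power / IT power. Illustrative (assumed) part-load
# comparison: a right-sized modular deployment vs. a fully built-out plant
# running lightly loaded.

def pue(it_kw: float, cooling_kw: float, power_loss_kw: float,
        lighting_misc_kw: float) -> float:
    total = it_kw + cooling_kw + power_loss_kw + lighting_misc_kw
    return total / it_kw

# Day-one IT load of 500 kW (hypothetical numbers throughout):
full_buildout = pue(it_kw=500, cooling_kw=300, power_loss_kw=90, lighting_misc_kw=40)
modular = pue(it_kw=500, cooling_kw=175, power_loss_kw=45, lighting_misc_kw=25)
print(f"Full build-out at part load: PUE of about {full_buildout:.2f}")
print(f"Modular, right-sized:        PUE of about {modular:.2f}")
```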

Sty: Modular data centers (containerized or prefabricated solutions) can be highly effective in the right application. This could be mobile/temporary command centers or supplementing IT functions for a group located in an existing building that does not have the existing MEP infrastructure to support it. We are currently investigating the use of a single containerized system for a new network node on a university campus. The solution is very clean, efficient, and cost-effective. The challenge of a containerized approach can be in large-scale (multimegawatt) applications. Modular solutions are designed to support a certain density and cannot take advantage of the "unused" or spare capacity of the adjacent container. Because it is extremely difficult to predict the actual IT requirements of the space, there is a large potential to strand power and cooling with the containerized systems. A traditional data center has that potential as well, but by right-sizing the mechanical and electrical components, it is far less likely.

CSE: Please describe a recent data center project you’ve worked on—share details about the project including location, building type, team involved, etc.

Hogge: Recently, we completed a research data center through the adaptive reuse of a former office building that includes a life science laboratory and cryogenic sample-storage spaces. Enclosed hot aisles improve operating efficiency, and custom built-up multifan arrays draw 100% outside air from a shaded zone and a belowgrade areaway, providing "free cooling" during roughly 90% of annual hours. Direct evaporative cooling both supplements and extends the economizer cycle, and traditional chilled-water coils provide backup cooling for extreme weather conditions. The power system features an N+1 modular UPS, onsite diesel generation, and facilitywide metering.
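A cooling scheme like the one described typically steps between outside-air economizing, evaporative assist, and chilled-water backup based on outdoor conditions. The sketch below is a simplified, hypothetical selection logic; the setpoints and the 4°F evaporative margin are assumptions, not the project’s actual control sequence.

```python
# Simplified, hypothetical selection logic for the cooling modes described
# (outside-air economizer, direct evaporative assist, chilled-water backup).
# Setpoints are assumed for illustration and are not the project's sequence.

def cooling_mode(outdoor_db_f: float, outdoor_wb_f: float,
                 supply_setpoint_f: float = 72.0) -> str:
    if outdoor_db_f <= supply_setpoint_f:
        return "economizer (100% outside air)"
    if outdoor_wb_f <= supply_setpoint_f - 4.0:  # assumed evaporative approach margin
        return "economizer + direct evaporative cooling"
    return "chilled-water coil backup"

for db, wb in [(55, 48), (85, 62), (95, 80)]:
    print(f"{db} F DB / {wb} F WB -> {cooling_mode(db, wb)}")
```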

Sty: We recently completed an enterprise data center for a local Native American community just outside of Phoenix. It had the usual challenges of reliability, energy efficiency, and flexible, scalable design, but the main challenge was that the new data center sat in a fairly prominent location on the campus. This raised concerns about physical security as well as how the facility would blend in architecturally while reflecting the culture and heritage of the community. Beyond meeting the goals of energy efficiency and reliability, the real success of the project resided in the selection of the general contractor and subcontractors. The community changed the selection process from "low bid" to "best value" to pick the most qualified team to work in conjunction with community-based contractors that did not have previous experience in the construction of mission critical facilities.

Vieira: I recently finished design on the Sinnovate Technology Hub located in King Abdullah Economic City, about 62 miles north of Jeddah, Saudi Arabia. Sinnovate included an administration building, an office building, a conference center, and a 3-MW data center with space for both enterprise and commercial functions. The data center incorporated modular design elements in both the power and cooling infrastructure to allow Sinnovate to scale out the facility as its IT demands grow. The most challenging design issues for this project were the summer dry-bulb temperature of 124°F and wet-bulb temperature of 95°F, as well as dealing with sand and sandstorms.
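To put those design conditions in perspective, the standard saturation-effectiveness relation for a direct evaporative cooler, T_supply = T_db − ε(T_db − T_wb), shows how little relief evaporative cooling alone can offer at a 95°F wet bulb. The effectiveness value below is assumed; this is an illustrative calculation, not a statement of the project’s cooling approach.

```python
# Quick illustration of what a 124 F dry bulb and 95 F wet bulb imply for
# direct evaporative cooling; the 0.90 effectiveness is an assumed value.

def evap_supply_temp_f(db_f: float, wb_f: float, effectiveness: float = 0.90) -> float:
    """Leaving-air temperature of an ideal-ish direct evaporative cooler."""
    return db_f - effectiveness * (db_f - wb_f)

print(f"{evap_supply_temp_f(124, 95):.1f} F")  # roughly 98 F supply air, still very warm
```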

Chadwick: Most data center clients we work with are highly secretive about their approaches. A few (such as Facebook through its Open Compute Project) share a lot of their design strategies and approaches. Our current major project with Facebook is its new site in Fort Worth, Texas. When completed, the building will be the fourth-largest data center in the U.S., and there is room onsite for more growth. The site deploys Facebook’s typical evaporative-cooling solution and unique electrical distribution design, as summarized on the Open Compute Project website, but also incorporates incremental design changes from the previous builds. The same team of engineers and builders has worked on the majority of Facebook’s projects, allowing our team to continually learn and improve on the next designs and builds.

CSE: Describe your experience working with the contractor, architect, owner, or other team members in creating a BIM model for a data center project.

Sty: BIM has moved from being the "latest and greatest" technology to standard practice in our industry and is an extremely powerful tool when used correctly. On the design side, we have used it to ensure coordination between systems throughout the facility, but especially on the data floor. As all of the cabling and power distribution has moved from below the raised floor to overhead systems, the data hall gets crowded very quickly. With BIM, we have confidence in our coordination and layout of the lights, cable tray, busway, and structural support grids. The real benefit of BIM, beyond the 3-D coordination, is the ability to layer on other modeling tools. Take the BIM model and overlay computational fluid dynamics (CFD) capabilities, and we can determine where airflow to the cabinets may be blocked and make adjustments in the design rather than out in the field. Many of our contracting partners request the design BIM model as a starting point for creating their own models, and recently we’ve seen facility managers ask for the model to use as the new version of their operations and maintenance manual.

Lane: Most of our data center designs now include BIM with the use of Autodesk Revit software. There is much more coordination among all the trades, as the base models are the same and are built from the same template. Weekly coordination and clash detection are critical. We are also building BIM blocks into the models. This has proven challenging, as electrical vendors seem to lag behind their architectural and mechanical counterparts; architectural and mechanical systems are physically larger and were implemented in Revit before the electrical systems. Consistent naming conventions among the MEP trades are critical to ensure a smooth process.

Chadwick: Our MEP designs have used 3-D models for more than 8 years. In the past several years, however, the design team has actively participated in the review of the field-coordinated model. Each project we complete incorporates more of the design elements into the model. Current designs are approaching American Institute of Architects Level of Development (LOD) 400 (fabrication level), fully coordinated with the rest of the design team’s models. Our standard model elements will contain property metadata such as manufacturer, capacity, load, flow, circuit, or Internet protocol (IP) address.
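The kind of property metadata described can be pictured as a simple record attached to each model element. The sketch below is hypothetical; the field names and example values are illustrative and do not represent any particular BIM schema or project.

```python
# Hypothetical sketch of the property metadata a model element might carry;
# field names and the example values are illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelElementMetadata:
    element_id: str
    manufacturer: str
    capacity: str                           # e.g., "30 tons" or "2,000 cfm"
    connected_load_kw: Optional[float] = None
    flow_cfm: Optional[float] = None
    circuit: Optional[str] = None           # panel/circuit designation
    ip_address: Optional[str] = None        # for networked or monitored equipment

crah_01 = ModelElementMetadata(
    element_id="CRAH-01",
    manufacturer="ExampleCo",
    capacity="30 tons",
    flow_cfm=12000,
    circuit="DP-1/12",
    ip_address="10.0.20.15",
)
```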

Hogge: Developing BIM models for data centers has allowed our project teams to quickly review project-execution details with the major MEP trades to identify conflicts ahead of construction, helping the team streamline its prefabrication activities for piping racks and underground electrical conduit. In addition to BIM models, we are using CFD modeling to validate mechanical designs beyond the white space to include generator rooms, electrical rooms, and outdoor heat-rejection re-entrainment. This extent of modeling and visualization greatly improves the client’s understanding of the design during conceptualization.

CSE: Describe a recent retrofit of a data center building. What were the challenges and solutions?

Chadwick: We recently completed an expansion of an existing data center complex. The original data center was designed to be expanded; however, challenges with the original build and stranded power and cooling systems required a different approach to the MEP build-out. Working within the original building envelope where permitted, we were able to take advantage of the additional power and cooling systems while upgrading to meet the latest space demands.

Hogge: We were recently involved in the testing of a complete electrical infrastructure replacement of a 3-MW data center in a multipurpose facility. The largest challenge was field validation of each underfloor whip to confirm that each cabinet was actually connected to both the A and B sources, and then replacing each rack’s power distribution unit (PDU), whip, and remote power panel (RPP) without disruption. A custom portable 480/208 V PDU was assembled and used to replace each circuit systematically by closely following methods of procedure.

Sty: Facilities that have been constructed recently more than likely had expansion and growth considered on the front end of planning and programming. Flexible, scalable, and expandable designs have become the norm in today’s industry. Retrofitting an older data center while it is in operation, without downtime, is a specialized challenge. Even a planned outage or a switchover to a disaster-recovery site can be a complex process. The design team must really understand the existing system components and capacities as it develops solutions.

A fairly recent modification to a condenser-water loop in an operational co-location facility presented challenges in the cutover because the piping system was not in a looped configuration. The contractors were engaged early in the design process to understand the modifications and to work with facilities management in developing a phasing plan for construction. We ran practice scenarios and simulated cutovers to ensure that all parties knew their roles in the operation and the appropriate response if something happened that wasn’t part of the original plan. Another challenge was the dust and debris generated during construction; barriers were installed to ensure these elements did not come into contact with the data cabinets.

CSE: Have you designed a data center using the integrated project delivery (IPD) method? If so, describe it.

Chadwick: We have not used IPD for a data center project yet, although we have several projects in the preliminary stages of discussing IPD as the best approach.

Sty: The goal of the IPD process is to leverage knowledge and expertise throughout the entire project, ultimately providing higher-quality results. The IPD method has been extremely successful for our mission critical group and our clients. Contractors we have teamed with have performed constructability reviews of our design solutions, and their feedback has yielded real savings in both capital cost and project schedule. The University of Utah data center had a strict budget; by working with the contractor (Okland Construction) during design, the design team (SmithGroupJJR/VCBO) was able to adjust the design to meet that budget.