Driving data center design

In the information age, data centers can be the beating heart of not just a building, but an entire global corporation. Engineers with experience working on data centers offer advice on their complex design and getting all the various aspects to compute.

By Consulting-Specifying Engineer January 29, 2015

Respondents

  • Andrew Baxter, PE, Principal/MEP Engineering Director, Page, Austin, Texas
  • Brandon Kingsley, PE, CxA, CEM, Project Manager, Primary Integration Solutions Inc., Charlotte, N.C.
  • Keith Lane, PE, RCDD, NTS, RTPM, LC, LEED AP BD+C, President/Chief Engineer, Lane Coburn & Associates LLC, Seattle
  • Dwayne Miller, PE, RCDD, CEO, JBA Consulting Engineers, Hong Kong

CSE: Please describe a recent data center project you’ve worked on. Share details about the project, including building location, size, owner’s project requirements (OPR), etc.

Andrew Baxter: Page recently designed a Tier III data center for a confidential Fortune 100 company located in the Chicago metropolitan area. It is one of the most efficient data centers in the transportation sector, supporting the client’s commitment to the environment, and was built to withstand severe weather conditions without compromising the integrity or security of its cooling system. The facility is anticipated to achieve an annual average power usage effectiveness (PUE) of 1.09. The new data center has been designed to achieve energy savings of approximately 50% beyond required efficiency standards with state-of-the-art economizer systems for cooling critical electrical rooms and air-handling units, an energy recovery make-up air-handling unit for ventilation, high-efficiency condensing boilers for heating, and highly efficient LED lighting. The design is based on N+2 1000 kW units, and Phase 1 was designed as a 4 MW information technology (IT) fit-up with a 12 MW total facility load at full build-out. Phase 1 of the 308,000-sq-ft project includes 25,000 sq ft of white space in a 180,000-sq-ft building. This initial phase includes the company’s backup emergency operations center (EOC), from which it can control its entire worldwide operations if the main operations center is ever down. The EOC contains conference rooms and 50 workstations for various user groups, which are focused on a video wall and a control room overlook for supervision. It is anticipated that approximately 85% of the facility will be free cooled.
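
For reference, PUE is simply the ratio of total facility energy to IT equipment energy, so an annual average of 1.09 implies only about 9% of overhead (cooling, distribution losses, lighting) beyond the IT load itself. A minimal sketch of that arithmetic, using the 4 MW Phase 1 IT fit-up as a hypothetical operating point (the assumption that the IT load runs at the full 4 MW is illustrative, not a project measurement):

```python
# Illustrative PUE arithmetic; the 4,000 kW IT load is an assumed operating point.
it_load_kw = 4000            # Phase 1 IT fit-up, assumed fully utilized
pue = 1.09                   # anticipated annual average PUE

total_facility_kw = it_load_kw * pue          # PUE = total facility power / IT power
overhead_kw = total_facility_kw - it_load_kw  # cooling, distribution losses, lighting, etc.

print(f"Total facility load: {total_facility_kw:.0f} kW")                                        # ~4360 kW
print(f"Non-IT overhead: {overhead_kw:.0f} kW ({overhead_kw / it_load_kw:.0%} of the IT load)")  # ~360 kW, ~9%
```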

Brandon Kingsley: Primary Integration is currently involved in commissioning a 128 MW cloud data center that is being deployed in four sites. The intent of the OPR is to design more reliability into the IT network and equipment and less into the mechanical, electrical, plumbing (MEP), and fire protection systems. The design consists of multiple buildings, which have hot aisle containment but no raised floors and no mechanical cooling. Cold aisle allowable operating temperatures can be as high as 90 F. Instead of designing the mechanical systems based on the IT equipment, the IT equipment was designed and selected in conjunction with the mechanical systems to operate within the mechanical system parameters. The biggest commissioning challenge has been staffing and scheduling to test the sheer quantity of mechanical and electrical equipment on each site, including 160 air handlers and 21 generators; however, because of the inherent simplicity of the mechanical and electrical system designs, we are not dealing with large central chiller plants that have complex control sequences.

Keith Lane: Lane Coburn & Associates has worked closely with Silent-Aire for more than 5 years enhancing the design of modular data center deployments around the country. There are numerous challenges and numerous benefits to the design, construction, and deployment of modular data centers. Modular data centers are designed and built as a complete system; the entire mechanical and electrical system is built around the client’s IT infrastructure needs and requirements. Modular off-site construction, as opposed to the conventional brick-and-mortar data center, delivers speed, performance, and cost containment. Building off-site in a controlled, safe, and environmentally friendly space often allows for much quicker deployment. In addition to the time savings in building the mechanical, electrical, and structural components, all components and systems are tested in the factory before shipping to the site. This saves time and money during the final integrated systems testing (IST) before handover to the client in the field. Optimal performance is achieved more quickly with a modular data center than with a solution built from scratch; the design specifications are tested and verified before the unit ships, thereby delivering immediate quality assurance. There are several advantages to building off-site, but the main ones are cost control and speed to market. Delays related to weather, site conditions, unreliable or inconsistent labor forces, and labor inefficiencies are greatly reduced or eliminated in a warehouse/prefabrication environment, which reduces cost and schedule. Additionally, estimating the total cost of the project is typically more accurate. In the end, this represents less risk to the end user. Factory-built modular data centers can carry either a UL or an ETL safety certification label certifying that they have been factory tested and meet the required electrical safety codes and requirements. This certification allows the modular data centers to be classified as equipment rather than modular buildings, which can often circumvent the permitting and inspection requirements that authorities having jurisdiction (AHJ) would normally demand for a brick-and-mortar build, thus allowing for aggressive and expedient deployments. Other benefits of classifying modular data centers as equipment: they can be depreciated as equipment, and opportunities exist for leasing or financing them as well.

Dwayne Miller: Our most recent data center projects have been enterprise data centers for international integrated resorts. The properties are both in excess of 4 million sq ft. Both data centers tie into property infrastructure; hence the cooling, normal power, and generator backup power are served from centralized systems. Owner requirements included on-site disaster recovery capabilities, which are addressed with primary and secondary data centers for each property. The centralized generator backup system is composed of multiple parallel engines, and the data center loads are second only to life safety systems with respect to load priority. In addition, from a cooling standpoint, the data center is tied into a large centralized chilled water system and is the highest-priority load for that system. A combination of centralized and localized infrastructure is deployed to ensure continuity of services.

CSE: What are the newest trends in data centers in mixed-use buildings?

Kingsley: This really depends on the rack density, required reliability, and available utilities. For example, a research-based data center at a college or university may have a high rack density and high-reliability requirement. As a result, MEP designers tend to use a high-density cooling solution such as an in-row cooling. An independent cooling system may also be used rather than relying on the central chiller plant, which may be shut down in winter. Increasingly, we are seeing heat recovery systems being used in mixed-use buildings to recover waste heat from the data center and use it for the building heating system. This may make the most economic sense as an energy savings strategy in a building with a large white space data center and office space that represents a fraction of the overall cost.

Lane: Flexibility and modularity are the key features clients require in the market today. It is critical to build in the flexibility to modify the design for future phases and to ensure the infrastructure is in place to accommodate changes. Modularity is critical so that incremental components can be added as density and/or redundancy increases are required.

Baxter: The biggest trend is probably moving data centers completely out of these types of buildings and having "purpose-built" facilities. Heat recovery would be a newer trend, especially in regions where extended heating periods allow the heat generated by the data center to be used for building heating.

CSE: What are some challenges you have faced in coordinating structural systems with mechanical, electrical, plumbing, or fire protection systems?

Lane: Data centers are unique facilities. The sheer amount of power and the critical nature of the loads being served require significant expertise. Uninterruptible power supplies, large standby generators, fuel supplies, large conductors, medium-voltage services, large transformers, various voltages, harmonic distortion, metering, PUE, and energy efficiency all must be considered in the design of data center facilities. Because of the unique nature of the electrical load profile, the heating of underground electrical duct banks must be evaluated. This involves 3-D modeling of the underground feeders as well as a comprehensive failure mode analysis and Neher-McGrath heating calculations. The initial cost of building a data center is tremendous. The long-term costs associated with running a data center, including the electrical and water services, are also very significant and must be considered during the design process. The electrical and mechanical engineers must work collaboratively to ensure the most reliable and cost-effective systems are designed and implemented. Enough design time must be built into the schedule to ensure value engineering ideas are fully vetted. Additionally, comprehensive commissioning of the data center should be provided by a third party to ensure all components of the MEP system work independently and as a system before actually serving critical loads.
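
For context, the Neher-McGrath method Lane mentions ultimately reduces to an ampacity equation that balances the allowable conductor temperature rise against conductor losses and the thermal resistance between the conductor and the surrounding earth. A commonly cited form of that relationship is sketched below; this is a conceptual summary in consistent units, not a substitute for a full duct bank study:

$$
I = \sqrt{\frac{T_c - \left(T_a + \Delta T_d\right)}{R_{dc}\,\left(1 + Y_c\right)\,R_{ca}}}
$$

where $T_c$ is the allowable conductor temperature, $T_a$ the ambient earth temperature, $\Delta T_d$ the temperature rise from dielectric losses, $R_{dc}$ the conductor dc resistance at operating temperature, $Y_c$ the increment for ac (skin and proximity effect) losses, and $R_{ca}$ the effective thermal resistance between conductor and ambient earth, which is where duct bank geometry, soil thermal resistivity, and mutual heating between adjacent feeders enter the calculation.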

Baxter: Coordinating all systems together, not just the MEP and structural systems, can be quite a challenge for these types of facilities. The structure can be very deep to carry the higher-than-normal weight densities a data center can impart, especially in multi-level facilities. This, along with the large number of cable trays, electrical raceways, mechanical systems, etc., can create significant space-management challenges. Add in specific project requirements such as seismic restraint, excess wind loading capabilities, or electromagnetic pulse (EMP) shielding, and final coordination becomes critical.

Kingsley: Coordination with structural systems is especially challenging in existing buildings. When a data center is added in an existing facility, the MEP systems have to be designed around fixed existing structural systems. The available floor-to-floor height may dictate the types of systems that can be installed, such as in-row cooling instead of a more conventional system using raised floors and ceiling return plenums. As commissioning agents, part of our job is to make sure that all of the MEP equipment is accessible and maintainable. Without good coordination among all systems, we may find that a cable tray, for example, is inaccessible. The increasing use of BIM in system design, engineering, and construction is improving coordination and reducing these types of problems.

CSE: How do you see the design approach for data centers changing in the next 2 to 5 years?

Miller: It’s my personal belief that proliferation and support of an Uptime Institute Tier III or Tier IV infrastructure environment, particularly for enterprise data centers, is not sustainable. In other words, the level of complexity, redundancy, energy consumption, first cost of infrastructure, and ongoing maintenance expenses are all going to make software-driven alternatives more attractive. I would suggest a more rational approach is to have a strategy wherein my data is housed and manipulated in 2, 3, or 4 geographically diverse locations. Each location is supported by reasonable infrastructure (Tier I or Tier II) with virtualization software providing seamless transfer between the sites in the event of an incident in one location. I believe this will also be the direction co-location sites will eventually take. In simple terms, a self-healing mesh within a mesh.
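
One rough way to frame Miller's argument is with availability arithmetic. Taking the commonly cited Uptime Institute design availability figures of roughly 99.741% for Tier II and 99.995% for Tier IV, and making the strong simplifying assumptions that sites fail independently and that the virtualization layer fails over seamlessly, a few modest sites can exceed a single high-tier site on paper. The sketch below is purely illustrative of that arithmetic:

```python
# Illustrative availability arithmetic. Assumes independent site failures and
# perfect failover between sites -- both strong simplifications.
tier_ii = 0.99741   # commonly cited Tier II design availability
tier_iv = 0.99995   # commonly cited Tier IV design availability

def combined(site_availability: float, sites: int) -> float:
    """Probability that at least one of `sites` independent sites is up."""
    return 1 - (1 - site_availability) ** sites

print(f"Single Tier IV site: {tier_iv:.6%}")               # 99.995000%
print(f"Two Tier II sites:   {combined(tier_ii, 2):.6%}")  # ~99.999329%
print(f"Three Tier II sites: {combined(tier_ii, 3):.6%}")  # ~99.999998%
```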

Kingsley: We are seeing more modular and scalable designs to provide for flexibility and phased build-out of data centers over time. Increasingly, these modular designs include a chiller plant for each phase; the plants are integrated to operate as a single chiller plant when the full build-out is completed. As commissioning agents, we always commission all previous phases, not only the current phase, to ensure that all phases are properly operating as an integrated system. I also think the strategy of designing higher reliability into the IT network and less into the MEP systems will become more common as owners find that they can save money by writing code rather than installing additional generators, air handlers, and electrical gear. There is also discussion of integrating the IT servers with the building automation system (BAS) to enable the servers to control the mechanical systems. This could be an effective strategy in cloud data centers, which can operate at the servers’ upper limits from the start. Each server can provide input for the operation of the HVAC system, rather than relying on a few BAS sensors throughout the data center. The reliability of the communication between the servers and the BAS will need to be designed for fail-safe operation, which will likely require installing BAS sensors to fall back on if that communication goes down.
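
The fail-safe relationship Kingsley describes, server telemetry driving the cooling control with BAS sensors as the backstop, can be expressed as a simple supervisory rule. The sketch below is hypothetical; the function names, thresholds, and data shapes are invented for illustration, and a real integration would go through the BAS vendor's and IT platform's actual interfaces:

```python
# Hypothetical supervisory logic: prefer per-server inlet temperatures, and fall
# back to BAS room sensors if the server telemetry is missing or stale.
from statistics import mean
from typing import Optional

STALE_AFTER_S = 60.0  # assumed threshold: older telemetry is distrusted

def cooling_error_f(server_inlet_temps_f: list[float],
                    telemetry_age_s: Optional[float],
                    bas_sensor_temps_f: list[float],
                    target_inlet_f: float = 80.0) -> float:
    """Degrees F above target at the worst measured point; positive means cool harder."""
    telemetry_ok = (bool(server_inlet_temps_f)
                    and telemetry_age_s is not None
                    and telemetry_age_s < STALE_AFTER_S)
    if telemetry_ok:
        observed = max(server_inlet_temps_f)   # protect the hottest server inlet
    else:
        observed = max(bas_sensor_temps_f)     # fail-safe: BAS sensors only
    return observed - target_inlet_f

# Healthy telemetry vs. a telemetry outage
print(cooling_error_f([78.5, 82.0, 79.1], telemetry_age_s=5.0, bas_sensor_temps_f=[76.0, 77.5]))  # 2.0
print(cooling_error_f([], telemetry_age_s=None, bas_sensor_temps_f=[76.0, 77.5]))                 # -2.5
```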

Baxter: Because of the high demand for IT infrastructure, speed to market is definitely pushing the design approaches used. To this end, we are seeing an increase in the amount of design-build and integrated delivery approaches used to shorten the time from when the project design kicks off to when the doors open on the new facility. Prefabricated systems (i.e., skids) for central electrical and mechanical systems are being used more as well.

Lane: We are seeing varying levels of redundancy in modern data centers. Ten to 15 years ago, we would see enormous data centers built to the same redundancy level and the same power density throughout the entire facility. Today we are seeing single data centers with Tier II portions, offering minimum redundancy for some of the critical loads, alongside Tier IV portions for other loads. The redundancy level depends on the specific function of the computing task. Very critical loads will be built with full 2N topology (or greater), while less critical loads will be built with N or N+1 topology. These loads could be in the same room. Additionally, we are building more data centers in a modular fashion, only building the power density required today while providing for future expansion. This includes provisions for additional uninterruptible power supply (UPS) modules, standby generators, chillers, and pumps.
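
Those topology labels translate directly into equipment counts. A minimal sketch of the arithmetic, assuming a hypothetical 4,000 kW critical load served by 1,000 kW UPS modules (figures chosen only for illustration):

```python
import math

# Hypothetical example: 4,000 kW of critical load on 1,000 kW UPS modules.
critical_load_kw = 4000
module_kw = 1000

n = math.ceil(critical_load_kw / module_kw)   # N: just enough modules to carry the load
print(f"N   : {n} modules")                   # 4
print(f"N+1 : {n + 1} modules")               # 5 (one spare module)
print(f"2N  : {2 * n} modules")               # 8 (a complete redundant second system)
```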

CSE: How has the increase in co-location facilities changed the way you design a data center?

Baxter: I am not sure it has. I actually think what we are seeing is that the co-location facilities, especially the more purpose-built ones, are starting to look more and more like enterprise facilities.

Kingsley: As commissioning agents, Primary Integration has worked in co-location facilities for many years. We have not observed much change in the design approach, except that it is now more common to see modular design to allow for a phased build-out.

CSE: Where (geographically) are you seeing the biggest boost in the number of data centers being built? Where do you expect more data centers to be built in the next 2 to 3 years?

Kingsley: Generally, we expect to see them all over the country and around the globe. Co-location facilities typically will remain concentrated in or near larger cities. Large cloud data centers, which may require 100 acres of land or more and have large power demands, typically will be located in more remote locations a couple of hours from the nearest large city.

Baxter: Inside the United States, we are seeing more in eastern Washington/Oregon, Denver/Cheyenne, Iowa, San Antonio, and the Mid-Atlantic (Virginia/North Carolina) region. Outside the United States, we are seeing growth in South America, Asia, and northern/eastern Europe.

CSE: How have cloud computing, apps, cyber security, and other trends changed the way in which you design a data center?

Lane: At the enterprise level, the cloud can reduce the intensity and scale of the infrastructure. The enterprise data center becomes more focused on information transport as the heavy computing happens in the cloud. For those of us who are old enough to remember mainframe days, we are seeing a migration back to a mainframe type of environment as the computing horsepower moves from the desktop/device and on-premises servers to the cloud. In simplistic terms, we are migrating back toward a computing ecosystem of dumb terminals, in the form of desktops, PCs, and mobile devices, paired with a new, and I would suggest improved, mainframe made up of the numerous computing nodes that comprise the cloud.

Baxter: Networking systems are becoming even more important and more significant components of the data center.

Kingsley: Due to the large capacity of cloud data centers, energy efficiency is extremely important. To achieve this, data centers are being designed to operate at the upper limits of the industry standards and beyond. Increasingly, MEP systems are being designed to the nameplate operating data of the IT equipment instead of industry standards. For example, ASHRAE TC 9.9