Focus on data centers

Designing efficient and effective data centers and mission critical facilities is a top priority for consulting engineers. Engineers share general feedback on their work in data centers.

By Consulting-Specifying Engineer January 15, 2013

Participants

  • Cyrus Gerami, PE, LEED, CxA, Associate, Senior Project Engineer/Manager, exp Global Inc., Maitland, Fla.
  • Kerr Johnstone, IEng, MIET, Senior Electrical Engineer, CH2M Hill, Glasgow, Scotland
  • Keith Lane, PE, RCDD/NTS, LC, LEED AP, President, Lane Coburn & Assocs., Bothell, Wash. 
  • James McEnteggart, PE, Vice President, Primary Integration Solutions Inc., Charlotte, N.C.
  • Robert M. Menuet, PE, Senior Principal, GHT Ltd., Arlington, Va. 
  • Brian Rener, PE, LEED AP, Electrical Platform Leader and Quality Assurance Manager, M+W Group, Chicago, Ill.

CSE: Please describe a recent project you’ve worked on.

Cyrus Gerami: I recently worked with a national colocation client on a 180,000-sq-ft Tier III colocation data center in Fairfax County, Va. It is a four-story building with an infrastructure cellar and eight colocation suites (N+1 MEP). The central chiller plant uses highly efficient magnetic oil-free compressors and evaporative condensers. Use of waterside and airside economizers is being closely evaluated. The facility power is backed up by eight 2.5 MW standby generators plus a ninth swing generator. In another project, for a national wireless phone company, we’re working on a 21,000-sq-ft switch room expansion using a refrigerant-cooled rear-door cooling system. At another wireless phone company, we are working on an $80 million expansion and renovation of an existing data center. This facility has N+N redundancy, which includes two separate utility power services and twice as many emergency power generators. The facility has 184,000 sq ft of building area with 70,000 sq ft of raised floor. The use of computer room air handlers (CRAHs) with airside economizers and ultrasonic evaporative cooling was a unique feature of the HVAC system. Electrical infrastructure was the first of three major phases of this project. It required that the electrical service and all main switchgear be replaced or upgraded to handle the higher projected densities for the facility IT load as well as the projected building expansion. The phase one load was projected at 15 MW, which necessitated installation of new 34.5 kV medium-voltage switchgear with double-ended boards, unit substations at each end, and six 2.5 MW generators. Concurrent with phase one was the build-out and augmentation of three 5,000-sq-ft IT suites to support existing load as well as projected future IT growth; all were designed or upgraded to support a 200 W/sq ft IT load. Phase three included upgrading the existing chiller systems to meet the resulting higher load demands.
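As a quick back-of-the-envelope check on how the quoted figures fit together, here is a minimal Python sketch. The suite area, design density, generator rating, and generator count are taken directly from the description above; the calculation itself is illustrative only and is not part of the project documentation.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).

SUITE_AREA_SQFT = 5_000        # each IT suite, per the description
DESIGN_DENSITY_W_SQFT = 200    # design IT load density, W/sq ft
NUM_SUITES = 3                 # suites built out concurrently with phase one

GEN_RATING_MW = 2.5            # rating of each generator
NUM_GENERATORS = 6             # generators installed in phase one
PHASE_ONE_LOAD_MW = 15         # projected phase one load

suite_it_load_mw = SUITE_AREA_SQFT * DESIGN_DENSITY_W_SQFT / 1e6
total_suite_load_mw = suite_it_load_mw * NUM_SUITES
total_gen_capacity_mw = GEN_RATING_MW * NUM_GENERATORS

print(f"IT load per suite:     {suite_it_load_mw:.1f} MW")
print(f"IT load, three suites: {total_suite_load_mw:.1f} MW")
print(f"Generator capacity:    {total_gen_capacity_mw:.1f} MW "
      f"(vs. {PHASE_ONE_LOAD_MW} MW projected phase one load)")
```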

Kerr Johnstone: Our Industrial & Advanced Technology (I&AT) group recently completed a new-build colocation data center for Digital Realty Trust, with the end client being Terremark in the Netherlands. The I&AT office in Glasgow, Scotland, provided architectural, mechanical, and electrical professional design services together with project management. The network access point (NAP) of Amsterdam will serve as Terremark’s flagship facility, located within the Amsterdam Airport Schiphol area and providing 25,000 sq ft of data center space featuring the latest in technological and engineering advancement. The facility is designed to meet Terremark’s highest performance criteria, offering advanced cooling, power, redundancy, and sustainability features to maintain the availability of business-critical applications while reducing energy consumption.

Keith Lane: The Sabey Quincy Data Center in Quincy, Wash., is an exciting recent project that we were heavily involved in. A brief description of the electrical topology: The main switchgear (MSG) consists of two switchgear (SWGR) lineups in a main-tie-tie-main configuration. Each unit substation can be fed from either of the MSG boards. The transformer switch position can be selected between MSG 1 and MSG 2 for full concurrent maintainability. Additionally, there are spare conduits brought from the MSG to the unit substations for redundancy. The system has full N+1 redundancy. Floor-mounted static transfer switches allow for full concurrent maintainability without reducing dual-cord servers to a single cord. Tier certification by the Uptime Institute was not pursued, but the data center is designed toward a Tier 3+ scenario.

James McEnteggart: We recently completed mechanical, electrical, and plumbing (MEP) commissioning of the two phases of a new data center in Sweden. This is a 370,000-sq-ft facility, which includes a 60,000-sq-ft white space designed to achieve U.S. Green Building Council LEED Gold. The design uses 100% airside economization with an evaporative cooling system and an electrical system with a 48 Vdc UPS system integrated with a 240 Vac server power supply. In winter, the hot air from the servers is even used to heat the office area. This mechanical solution also avoided the costs of buying, designing, and installing a conventional chiller system.

Rob Menuet: I’m working on a 7,500-sq-ft data center for a confidential user in an undisclosed location. It offers 2.5 MW of critical load and features chilled water direct evaporative cooling and hot aisle containment.

Brian Rener: We recently completed a 9 MW containerized data center in Washington state. The project used a common spine, with secondary power distribution equipment feeding IT containers just outside the spine. Medium-voltage equipment, including switchgear and generators, was located on the site outside the spine and IT deployment areas.

CSE: How have the characteristics of mission critical facilities and data centers changed in recent years, and what should engineers expect to see in the near future?

Lane: Mission critical facility owners expect more for less. The engineer needs to stay abreast of the latest technologies and distribution scenarios to ensure the most reliability for the money. At the same time, energy efficiency as measured by both average and maximum power usage effectiveness (PUE) is critical. We are also seeing more failures of existing data center electrical duct banks. These failures are typically seen in data centers designed a decade or so ago that are now being used to their full potential. The high load factors seen with new servers are heating the electrical duct banks to levels not anticipated. In most of the cases we see, Neher-McGrath heating calculations were either not performed or not performed correctly, or the contractors did not install the duct banks with proper compaction.
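For reference, PUE is total facility power (or energy) divided by the power (or energy) delivered to the IT equipment. The sketch below shows one way average and maximum PUE might be derived from interval meter readings; the sample values are hypothetical.

```python
# Minimal PUE sketch: PUE = total facility power / IT equipment power.
# The readings below are hypothetical hourly meter samples (kW).

facility_kw = [1480, 1520, 1610, 1750, 1690, 1550]  # utility meter
it_kw       = [1000, 1010, 1030, 1060, 1050, 1020]  # UPS/PDU output

interval_pue = [f / i for f, i in zip(facility_kw, it_kw)]

average_pue = sum(facility_kw) / sum(it_kw)   # energy-weighted average
maximum_pue = max(interval_pue)               # worst single interval

print(f"Average PUE: {average_pue:.2f}")
print(f"Maximum PUE: {maximum_pue:.2f}")
```

Note that the average here is energy-weighted (total facility energy over total IT energy) rather than a simple mean of the interval ratios, which matches the usual convention for reporting annualized PUE.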

Rener: There has been an increasing focus on facility costs, energy efficiency, and flexible/expandable designs. I believe we are approaching limits in what we can do on the facility side in terms of energy efficiency, commonly measured as PUE, and the focus is turning to the data equipment itself.

Menuet: Energy efficiency is increasing in importance, the environmental envelope has widened, and options for cooling delivery systems have multiplied. Data center HVAC systems are now the low-hanging fruit in the quest for improved energy performance, and deservedly so: systems and equipment have historically been targeted to maximize availability, and there is plenty of room for improvement. Next, we will see improved partnerships between providers of engineered cooling solutions and server manufacturers. Data center environments can be improved if room cooling configurations are taken into account by the manufacturers. In addition, mechanical systems will be more closely coupled to the end load. There is a large energy advantage to removing heat directly from the source with water, rather than moving it around in big clouds of air. Cold plate technology and direct water-cooled microprocessors will become more common.
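To put rough numbers behind the air-versus-water point, the following sketch compares the volumetric flow each fluid needs in order to carry the same heat load at the same temperature rise. It uses generic textbook fluid properties and an assumed 10 kW heat source with a 10 K rise; it is a generic illustration, not an analysis from any of the projects discussed here.

```python
# Rough comparison of air vs. water for removing the same heat load at the
# same temperature rise (generic textbook properties, not a design calc).

HEAT_LOAD_W = 10_000      # e.g., one 10 kW rack (assumed)
DELTA_T_K = 10            # assumed temperature rise across the rack / coil

# Approximate fluid properties near room temperature
AIR_RHO, AIR_CP = 1.2, 1005          # kg/m^3, J/(kg*K)
WATER_RHO, WATER_CP = 998, 4186      # kg/m^3, J/(kg*K)

def volumetric_flow_m3_per_s(q_w, rho, cp, dt):
    """Q = rho * V_dot * cp * dT  ->  V_dot = Q / (rho * cp * dT)."""
    return q_w / (rho * cp * dt)

air_flow = volumetric_flow_m3_per_s(HEAT_LOAD_W, AIR_RHO, AIR_CP, DELTA_T_K)
water_flow = volumetric_flow_m3_per_s(HEAT_LOAD_W, WATER_RHO, WATER_CP, DELTA_T_K)

print(f"Air flow required:   {air_flow:.3f} m^3/s (~{air_flow * 2119:.0f} cfm)")
print(f"Water flow required: {water_flow * 1000:.3f} L/s")
print(f"Air needs roughly {air_flow / water_flow:,.0f}x the volume flow of water")
```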

Johnstone: A large and continuing focus for mission critical facilities is providing the infrastructure required to support the increased load and demand driven by the processing power now available in data center racks. Clients want resilience and capacity, but designers need to be cognizant of overdesigning and overprovisioning at an early stage, so designs need to be able to adapt. This increased demand will keep the focus on design efficiency in the use of materials, plant, and space, in addition to true operating costs.

McEnteggart: Twenty years ago, banks, telecom companies, and other major data center owners were primarily focused on ensuring reliability and maintaining high levels of availability, which required a lot of redundant mechanical and electrical systems at a high capital cost. As energy prices have increased, owners are taking a much broader view of the total cost of ownership—the total cost to build and operate a facility and provide IT capability—using new metrics such as “cost per megawatt.”

Gerami: Changes include the shift to colocation and cloud computing; a flattening of the upward trend in W/sq ft or kW/rack as more efficient servers come to market; the impact of energy conservation codes on data center design and operation; and telecommunication facilities being upgraded into data centers, with landline phone facilities now delivering content to wireless smartphones.

CSE: How have cloud data centers affected your work? What trends are you seeing?

Rener: We are seeing less reliance on higher tier levels, multiple generators, and UPS systems, and more focus on flexibility, modularity, and cost-efficient deployments.

Johnstone: With the aim of providing instant access to a shared data network, cloud computing should provide improved load diversity, because a greater spread of customers requires different functions at different times. For end users, it is much easier to add cloud storage to an existing system than it is to upgrade an internal data center, so older, less efficient centers will be required to adapt. That should result in more efficient data centers with reduced running costs.

Gerami: Cloud computing’s direct impact on the type of MEP services primarily depends on the client’s business model. This translates into the level of infrastructure redundancy and load density.

Lane: We are seeing higher densities and higher load factors in new cloud-based data centers. The loading and rating of generators become more important, as does the requirement for Neher-McGrath heating calculations for the underground electrical duct banks. Additionally, based on the type of application and the level of geo-redundancy, we are seeing varying levels of redundancy within a data center. Gone is the time when the entire data center was designed toward a certain tier level. Some applications may not require extremely high levels of reliability; others may require a Tier 3 to Tier 4 topology. The client can save money during the installation and run at lower PUE levels for those applications requiring less reliability.

Menuet: The direct impact has been nominal for our clients, which are typically brick-and-mortar companies; the majority of our customers still want control over their data. We see companies creating business in the cloud but not taking business to the cloud—for now.

McEnteggart: With the development of cloud computing, there is a trend toward ensuring reliability and availability using multiple data centers in different geographic locations rather than a single, highly robust data center with redundant mechanical and electrical systems. An owner can effectively build two Tier 2 data centers for far less than the cost of one Tier 4 data center—and when the Tier 2 facilities act as backup to one another, the result is a more reliable system than a single Tier 4. For us as commissioning agents, this means we may test and commission a greater number of data centers for a given owner but spend less time at each because of the lower level of system complexity.
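The arithmetic behind the two-site argument is straightforward if the sites are assumed to fail independently. The sketch below uses commonly cited availability figures for Tier II (99.741%) and Tier IV (99.995%) facilities; the independence assumption is an idealization, since real sites can share failure modes such as wide-area network or utility events.

```python
# Illustrative availability comparison: two Tier 2 sites backing each other
# up vs. one Tier 4 site. Availability figures are commonly cited values;
# the two sites are assumed to fail independently (an idealization).

TIER_2_AVAILABILITY = 0.99741
TIER_4_AVAILABILITY = 0.99995

# Service is lost only if both Tier 2 sites are down at the same time.
paired_tier_2 = 1 - (1 - TIER_2_AVAILABILITY) ** 2

HOURS_PER_YEAR = 8760
print(f"Single Tier 4 site:      {TIER_4_AVAILABILITY:.5%} "
      f"(~{(1 - TIER_4_AVAILABILITY) * HOURS_PER_YEAR:.2f} h downtime/yr)")
print(f"Two paired Tier 2 sites: {paired_tier_2:.5%} "
      f"(~{(1 - paired_tier_2) * HOURS_PER_YEAR:.2f} h downtime/yr)")
```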

CSE: What type of modeling tools do you use?

Gerami: We use computational fluid dynamics (CFD) software (TileFlow) and energy simulation (Trace 700).

Johnstone: We use a variety of modeling tools, such as Autodesk Revit and Revit MEP, SKM Systems Analysis PowerTools, Amtech, Dialux, and IES <VE>. CH2M HILL has developed a bespoke thermal model for overall energy consumption, which contains different system options so that potential energy savings can be understood. By modeling these values early in the design process, we can better determine a cost-effective cooling strategy, enabling the client to make a more informed and objective decision at an early stage.
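The sketch below is a toy version of that kind of early-stage option comparison, not CH2M HILL’s model. It assumes a fixed IT load, a flat electricity tariff, and hypothetical annualized PUE values for three cooling strategies, and reports each option’s annual energy use and cost saving against the first option.

```python
# Toy early-stage comparison of cooling options by annual energy and cost.
# All PUE values and the tariff are hypothetical assumptions.

IT_LOAD_KW = 1_000
TARIFF_PER_KWH = 0.10      # assumed flat electricity tariff, $/kWh
HOURS_PER_YEAR = 8760

options = {
    "Chilled water, no economizer": 1.60,              # assumed annualized PUE
    "Chilled water + waterside economizer": 1.35,
    "Airside economizer + evaporative cooling": 1.20,
}

baseline_kwh = None
for name, pue in options.items():
    annual_kwh = IT_LOAD_KW * HOURS_PER_YEAR * pue
    if baseline_kwh is None:
        baseline_kwh = annual_kwh                      # first option is the baseline
    saving = (baseline_kwh - annual_kwh) * TARIFF_PER_KWH
    print(f"{name:42s} PUE {pue:.2f}  {annual_kwh / 1e6:.2f} GWh/yr  "
          f"saves ~${saving:,.0f}/yr vs. baseline")
```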

Lane: We use numerous modeling tools. These tools include the following:

  • SKM Systems Analysis PowerTools for power system modeling
  • AMPCALC for Neher-McGrath heating calculations
  • Revit/BIM
  • NavisWorks
  • Autodesk MEP
  • Bentley MicroStation
  • 3-D animation

Menuet: We use TileFlow and FLOW-3D for CFD, along with Autodesk AutoCAD MEP, Autodesk Revit MEP, Dapper, and Captor.

Rener: It depends on what you want to model. One of the key tools we use is CFD, which is essential for analyzing the effectiveness of cooling in data centers and for site-wide airflow modeling. We use a program (6SigmaDC by Future Facilities) that accurately models a virtual data center, using data equipment databases and 3-D modeling for air and heat flows.
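Production CFD packages such as 6SigmaDC solve coupled airflow and heat transport in three dimensions, which is well beyond a short example. The toy sketch below solves only a conduction-style 2-D temperature field with Jacobi iteration, with the boundary held at an assumed supply temperature and two interior cells held at an assumed hot exhaust temperature; it is meant solely to illustrate the “solve a field of values on a grid” idea behind such tools.

```python
# Toy 2-D steady-state temperature field via Jacobi iteration. This is NOT a
# CFD model (no airflow is solved); it only illustrates the idea of computing
# a field of values on a grid, which tools like 6SigmaDC do in far more detail.

GRID = 20                          # 20 x 20 cells
SUPPLY_TEMP = 18.0                 # assumed boundary (supply air) temperature, deg C
HOT_CELLS = {(10, 10), (10, 11)}   # cells held at an assumed hot exhaust temperature
HOT_TEMP = 45.0

temps = [[SUPPLY_TEMP] * GRID for _ in range(GRID)]   # initial field

for _ in range(2000):              # Jacobi sweeps toward steady state
    new = [row[:] for row in temps]
    for i in range(1, GRID - 1):
        for j in range(1, GRID - 1):
            if (i, j) in HOT_CELLS:
                new[i][j] = HOT_TEMP               # fixed heat source
            else:                                  # average of the four neighbors
                new[i][j] = 0.25 * (temps[i - 1][j] + temps[i + 1][j]
                                    + temps[i][j - 1] + temps[i][j + 1])
    temps = new

print(f"Temperature two cells from the hot spot: {temps[10][13]:.1f} C")
```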